The Grasp Strategy of a Robot Passer Influences Performance and Quality of the Robot-Human Object Handover

  • Authors: Ortenzi, V; Cini, F; Pardi, T; Marturi, N; Stolkin, R; Corke, P; Controzzi, M
  • Venue: Frontiers in Robotics and AI
  • Year: 2020
  • Reviewed by: Daniel Kennedy

Broad area/overview

This paper demonstrates that humans are sensitive to the position and orientation of a robot's grasp during handover tasks. The perceived quality of human-robot interaction improves when robots choose more "task aware" grasps, which in turn enables more ergonomic collaborative workspaces and reduces psychological strain on the human partner.

Specific Problem

A common task in human-robot interaction is the object handover. While there has been prior work on measuring the perceived quality of these interactions, an object handover is typically performed so that a human can complete a subsequent task. Therefore, any performance metric should account for both the handover itself and the completion of that subsequent task; one possible timing decomposition of such a metric is sketched at the end of this section.

This paper focuses on how the type of grasp influences the performance and quality of the handover and its subsequent task. Determining how best to grasp an object is a complex problem that has been addressed by several algorithms. However, which of these methods is best suited to a particular situation remains understudied, and this is another focus of the paper.
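
One way to make such a combined metric concrete is a simple timing decomposition. The sketch below is a hypothetical Python illustration, not the paper's measurement protocol; the phase names and the `TrialTiming` class are assumptions made for this review.

```python
# Hypothetical timing decomposition for scoring a handover-plus-task trial.
from dataclasses import dataclass

@dataclass
class TrialTiming:
    handover_s: float    # robot offers the object until the human has a stable hold
    adjustment_s: float  # human re-orients / re-grips the object in hand
    task_s: float        # human performs the subsequent task with the object

    def total(self) -> float:
        # A task-aware grasp should mainly shrink the adjustment phase,
        # leaving the handover and task phases largely unchanged.
        return self.handover_s + self.adjustment_s + self.task_s
```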

Solution Ideas

  • The traditional grasp strategy is to simulate the contact forces between the object and the robot end effector and choose the grasp position and orientation that optimize object stability, while keeping the grasp posture within the robot's operational workspace. It requires the object's location and geometry to be fully characterized in advance. This is referred to as a "task-unaware grasp". The grasp simulation and planning software is provided by the Simox toolbox.

  • The new grasp strategy presented in this paper is the "task-oriented grasp", in which a grasp posture is chosen in advance for each object based on its perceived affordances. Since each object is intended to be used by a person to complete a task, the task-oriented grasp selects a posture that leaves the object's handle free and offers it to the person, for a more natural and collaborative interaction. The authors propose that both task performance and the perceived quality of the handover will improve when the human is offered the object's affordances. (A sketch contrasting the two strategies follows this list.)

  • Five test objects were chosen from the YCB dataset (Calli et al., 2015a,b, 2017) to represent a variety of tasks and potential grasp postures: a mustard bottle, a screwdriver, scissors, a mug, and a drill.

  • A 7-DOF KUKA robotic arm and the five test objects are hidden behind a screen. The robot randomly selects an object and a grasp strategy (task-unaware or task-oriented), picks up the object, and presents it to a person on the other side of the screen.

  • The robot's torque sensors detect when the person has taken hold of the object, at which point the robot releases its grasp. The person receives the object from the robot and uses it to complete a simple task.

  • The duration of the task is measured and the person is surveyed on other qualitative aspects of the interaction.

  • In all cases the human participants preferred the task-oriented grasp strategy, in which they were handed an object by its perceived affordances, over the task-unaware strategy that optimizes stability. Participants spent less time reconfiguring the object in their hands than with the task-unaware grasp and rated the interaction as a whole much more positively.
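
To make the contrast between the two strategies concrete, here is a minimal, hypothetical Python sketch; it is not the authors' implementation (the paper's task-unaware planner relies on the Simox toolbox). The stability score below is a simplified Ferrari-Canny (epsilon) metric, and all names (`epsilon_quality`, `select_task_unaware_grasp`, `AFFORDANCE_GRASPS`, the `"wrenches"` field) are assumptions made for illustration.

```python
# Hypothetical sketch contrasting the two grasp-selection strategies.
import numpy as np
from scipy.spatial import ConvexHull

def epsilon_quality(contact_wrenches):
    """Simplified stability score: radius of the largest origin-centered ball
    inside the convex hull of the contact wrenches (Ferrari-Canny metric).
    Returns 0.0 if the origin lies outside the hull (no force closure)."""
    hull = ConvexHull(contact_wrenches)
    # Each row of hull.equations is [normal, offset] with normal.x + offset <= 0
    # for interior points, so -offset is the origin-to-facet distance when the
    # origin is inside the hull.
    return max(0.0, float(np.min(-hull.equations[:, -1])))

def select_task_unaware_grasp(candidates):
    """Pick the candidate grasp with the best simulated stability,
    ignoring what the human will do with the object afterwards."""
    return max(candidates, key=lambda g: epsilon_quality(g["wrenches"]))

# Task-oriented grasp: a per-object table of postures chosen so that the
# object's affordance (e.g. the handle) stays free and is offered to the human.
AFFORDANCE_GRASPS = {
    "mug":         {"grasp_part": "body",  "present_to_human": "handle"},
    "screwdriver": {"grasp_part": "shaft", "present_to_human": "handle"},
    "drill":       {"grasp_part": "body",  "present_to_human": "handle"},
}

def select_task_oriented_grasp(object_name):
    """Look up the preprogrammed, affordance-aware grasp for this object."""
    return AFFORDANCE_GRASPS[object_name]
```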

Comments

  • The paper presents an interesting testing methodology to compare human-robot interactions and could easily be adapted to other experimental setups. The mixture of quantitative and qualitative performance metrics gives a good overall picture of the strengths and weaknesses of the two grasping strategies.

  • The implementation of any particular grasp strategy is outside the scope of the paper, but both strategies have significant shortcomings that would make them impractical outside of a controlled test environment. The Simox-based planner requires full characterization of the object's location and geometry in advance and is computationally expensive. The task-oriented grasp has no automated implementation yet and therefore relies on preprogrammed grasp postures.

Future Work

  • There may be links to machine learning and object classification algorithms, which could recognize an object and its affordances automatically and thereby provide the basis for an automated task-oriented grasp strategy (a speculative sketch follows).
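
As a speculative illustration of this direction (not something implemented in the paper), an object classifier could feed an affordance-grasp table like the one sketched earlier, removing the need to tell the robot in advance which object it is handing over. The function, its arguments, and the fallback behaviour below are hypothetical.

```python
# Speculative sketch: couple an object classifier with an affordance lookup
# to automate the task-oriented strategy. All names here are hypothetical.
def automated_task_oriented_grasp(rgb_image, classifier, affordance_grasps):
    """Classify the object in view, then return its affordance-aware grasp.

    `classifier` is any callable mapping an image to a class label such as
    "mug" or "drill"; `affordance_grasps` maps labels to preprogrammed grasps.
    """
    label = classifier(rgb_image)
    grasp = affordance_grasps.get(label)
    if grasp is None:
        # Falling back to a stability-optimized (task-unaware) grasp would be
        # a natural extension when no affordance-aware grasp is defined.
        raise ValueError(f"No affordance-aware grasp defined for '{label}'")
    return grasp
```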

Referenced Papers